signal data
Masked Training for Robust Arrhythmia Detection from Digitalized Multiple Layout ECG Images
Zhang, Shanwei, Zhang, Deyun, Tao, Yirao, Wang, Kexin, Geng, Shijia, Li, Jun, Zhao, Qinghao, Liu, Xingpeng, Zhou, Yuxi, Hong, Shenda
The electrocardiogram (ECG) is an important tool for diagnosing cardiovascular diseases such as arrhythmia. Because hospitals use different ECG layouts, digitized signals exhibit asynchronous lead timing and partial blackout loss, which poses a serious challenge to existing models. To address this, the study introduces PatchECG, a masked-training framework for representation learning that adapts to a variable number of missing patches and automatically focuses on key patches with collaborative dependencies between leads, enabling arrhythmia recognition across ECGs with different layouts. Experiments were conducted on the PTB-XL dataset and 21,388 asynchronous ECG images generated with the ecg-image-kit tool, using the 23 subclasses as labels. The proposed method demonstrated strong robustness across layouts, with an average Area Under the Receiver Operating Characteristic Curve (AUROC) of 0.835 that remained stable as the layout changed. In external validation on 400 real ECG images from Chaoyang Hospital, the AUROC for atrial fibrillation diagnosis reached 0.778; on 12x1-layout ECGs, it reached 0.893. These results surpass various classic interpolation and baseline methods and improve on the current best large-scale pre-trained model, ECGFounder, by 0.111 and 0.190, respectively.
- Asia > China > Beijing > Beijing (0.05)
- Asia > China > Tianjin Province > Tianjin (0.04)
- North America > United States > District of Columbia > Washington (0.04)
- Asia > China > Anhui Province > Hefei (0.04)
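To make the masking idea above concrete, here is a minimal sketch of random patch masking for a multi-lead ECG, where masked patches stand in for leads that are absent or blacked out in a given printed layout. This is an illustration only, not the authors' implementation; `patchify`, the patch length, and the mask ratio are all assumed for the example.

```python
import torch

def patchify(ecg, patch_len=250):
    """Split a (leads, samples) ECG into non-overlapping patches.

    Returns a (leads * n_patches, patch_len) tensor.
    """
    leads, samples = ecg.shape
    n_patches = samples // patch_len
    return ecg[:, : n_patches * patch_len].reshape(leads * n_patches, patch_len)

def random_mask(patches, mask_ratio=0.4):
    """Zero out a random subset of patches and return the keep mask.

    Masked patches emulate leads that are missing or blacked out in a
    given printed layout (e.g., 3x4 vs. 12x1).
    """
    keep = torch.rand(patches.shape[0]) > mask_ratio  # True = patch visible
    return patches * keep.unsqueeze(1), keep

# Toy usage: a 12-lead, 10-second ECG sampled at 500 Hz.
ecg = torch.randn(12, 5000)
patches = patchify(ecg)
masked, keep = random_mask(patches)
print(patches.shape, masked.shape, keep.float().mean())
```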
FERMI: Flexible Radio Mapping with a Hybrid Propagation Model and Scalable Autonomous Data Collection
Luo, Yiming, Wang, Yunfei, Chen, Hongming, Wu, Chengkai, Lyu, Ximin, Zhou, Jinni, Ma, Jun, Zhang, Fu, Zhou, Boyu
Communication is fundamental for multi-robot collaboration, with accurate radio mapping playing a crucial role in predicting signal strength between robots. However, modeling radio signal propagation in large and occluded environments is challenging due to complex interactions between signals and obstacles. Existing methods face two key limitations: they struggle to predict signal strength for transmitter-receiver pairs not present in the training set, while also requiring extensive manual data collection for modeling, making them impractical for large, obstacle-rich scenarios. To overcome these limitations, we propose FERMI, a flexible radio mapping framework. FERMI combines physics-based modeling of direct signal paths with a neural network to capture environmental interactions with radio signals. This hybrid model learns radio signal propagation more efficiently, requiring only sparse training data. Additionally, FERMI introduces a scalable planning method for autonomous data collection using a multi-robot team. By increasing parallelism in data collection and minimizing robot travel costs between regions, overall data collection efficiency is significantly improved. Experiments in both simulation and real-world scenarios demonstrate that FERMI enables accurate signal prediction and generalizes well to unseen positions in complex environments. It also supports fully autonomous data collection and scales to different team sizes, offering a flexible solution for creating radio maps.

From the introduction: Communication plays a critical role in multi-robot cooperative tasks such as exploration and environment inspection [24, 49]. Stable communication is essential not only for information sharing and task allocation within robot teams [7, 54] but also for effective interaction between robots and human operators [43]. This demand has driven the development of adaptive communication strategies using robot routers [12, 48]. However, the effectiveness of such strategies is significantly constrained by the current limitations in radio mapping. Radio mapping, which aims to predict signal strength at a receiver based on the positions of the transmitter (Tx) and receiver (Rx), relies on accurately modeling wireless signal propagation in the environment.
- Asia > China > Hong Kong (0.44)
- Asia > China > Guangdong Province > Guangzhou (0.24)
- North America > United States (0.04)
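The hybrid model described in the abstract, a physics-based direct-path term plus a learned correction for obstacle interactions, can be sketched roughly as follows. The log-distance path-loss form, network sizes, and input features are assumptions for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

class HybridRadioMap(nn.Module):
    """Hybrid signal-strength model in the spirit of FERMI: a physics-based
    direct-path term plus a learned residual for environment effects."""
    def __init__(self):
        super().__init__()
        self.p0 = nn.Parameter(torch.tensor(-40.0))  # reference RSS at 1 m (dBm), assumed
        self.n = nn.Parameter(torch.tensor(2.0))     # path-loss exponent, assumed
        self.residual = nn.Sequential(               # learns obstacle interactions
            nn.Linear(6, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, tx, rx):
        # tx, rx: (batch, 3) transmitter/receiver positions in meters.
        d = torch.norm(tx - rx, dim=1, keepdim=True).clamp(min=1e-3)
        direct = self.p0 - 10.0 * self.n * torch.log10(d)  # log-distance path loss
        return direct + self.residual(torch.cat([tx, rx], dim=1))

# Toy usage: predicted RSS (dBm) for 8 random Tx/Rx pairs.
model = HybridRadioMap()
rss = model(torch.randn(8, 3), torch.randn(8, 3))
```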
CLaSP: Learning Concepts for Time-Series Signals from Natural Language Supervision
Ito, Aoi, Dohi, Kota, Kawaguchi, Yohei
This paper proposes a foundation model called "CLaSP" that searches time-series signals using natural language descriptions of signal characteristics as queries. Previous efforts to represent time-series signals in natural language have faced challenges in designing a conventional class of signal characteristics, formulating their quantification, and creating a dictionary of synonyms. To overcome these limitations, the proposed method introduces a neural network based on contrastive learning. The network is first trained on the TRUCE and SUSHI datasets, which consist of time-series signals paired with natural language descriptions. Previous studies have proposed vocabularies that data analysts use to describe signal characteristics, and SUSHI was designed to cover these terms; we believe a neural network trained on these datasets will enable data analysts to search using natural language vocabulary. Furthermore, the method requires no dictionary of predefined synonyms, instead leveraging the common-sense knowledge embedded in a large language model (LLM). Experimental results demonstrate that CLaSP enables natural language search of time-series signal data and accurately learns the points at which signals change.
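CLaSP's contrastive training pairs signal embeddings with text embeddings; in its simplest CLIP-style form the objective looks like the sketch below. The encoders, embedding size, and temperature are placeholders, not details from the paper.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(signal_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired (signal, description)
    embeddings; matched pairs lie on the diagonal of the similarity matrix."""
    signal_emb = F.normalize(signal_emb, dim=1)
    text_emb = F.normalize(text_emb, dim=1)
    logits = signal_emb @ text_emb.t() / temperature
    targets = torch.arange(logits.shape[0])
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2

# Toy usage with stand-in encoders producing 128-d embeddings for 32 pairs.
loss = contrastive_loss(torch.randn(32, 128), torch.randn(32, 128))
```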
EEG-DIF: Early Warning of Epileptic Seizures through Generative Diffusion Model-based Multi-channel EEG Signals Forecasting
Jiang, Zekun, Dai, Wei, Wei, Qu, Qin, Ziyuan, Li, Kang, Zhang, Le
Multi-channel EEG signals are commonly used for the diagnosis and assessment of diseases such as epilepsy. Various deep learning-based EEG diagnostic algorithms have been developed, but most research focuses solely on diagnosing and classifying current signal data and does not consider predicting future trends for early warning. Moreover, since multi-channel EEG is essentially spatio-temporal signal data received by detectors at different locations in the brain, constructing spatio-temporal representations of EEG signals that support future trend prediction becomes an important problem. This study proposes a multi-signal prediction algorithm based on generative diffusion models (EEG-DIF), which transforms the multi-signal forecasting task into an image completion task, allowing comprehensive representation and learning of the spatio-temporal correlations and future developmental patterns of multi-channel EEG signals. We employ a publicly available epilepsy EEG dataset to construct and validate EEG-DIF. The results demonstrate that our method can accurately predict future trends for multi-channel EEG signals simultaneously. Furthermore, the early-warning accuracy for epileptic seizures based on the generated EEG data reaches 0.89. Overall, EEG-DIF provides a novel approach for characterizing multi-channel EEG signals and an innovative early-warning algorithm for epileptic seizures, helping to optimize and enhance the clinical diagnosis process. The code is available at https://github.com/JZK00/EEG-DIF.
- Asia > China > Sichuan Province > Chengdu (0.04)
- Europe > Switzerland > Basel-City > Basel (0.04)
- Research Report > Promising Solution (0.66)
- Research Report > New Finding (0.48)
- Health & Medicine > Therapeutic Area > Neurology > Epilepsy (1.00)
- Health & Medicine > Therapeutic Area > Genetic Disease (1.00)
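The paper's central reframing, forecasting as image completion, can be illustrated with a toy masking function: the channels-by-time EEG array is treated as a 2-D image and the future window is masked for a diffusion model to fill in. The function name and horizon here are hypothetical.

```python
import numpy as np

def to_inpainting_task(eeg, horizon):
    """Frame multi-channel EEG forecasting as image completion.

    eeg: (channels, time) array treated as a 2-D image.
    Returns the observed image and a mask where 1 marks the future
    region a diffusion model would be asked to fill in.
    """
    channels, time = eeg.shape
    mask = np.zeros((channels, time), dtype=np.float32)
    mask[:, time - horizon:] = 1.0  # future columns are unknown
    observed = eeg * (1.0 - mask)   # zero out the forecast window
    return observed, mask

# Toy usage: 16 channels, 1024 samples, forecast the last 256 samples.
observed, mask = to_inpainting_task(np.random.randn(16, 1024), horizon=256)
```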
Radio Signal Classification by Adversarially Robust Quantum Machine Learning
Wu, Yanqiu, Adermann, Eromanga, Thapa, Chandra, Camtepe, Seyit, Suzuki, Hajime, Usman, Muhammad
Radio signal classification plays a pivotal role in identifying the modulation scheme used in received radio signals, which is essential for demodulation and proper interpretation of the transmitted information. Researchers have underscored the high susceptibility of ML algorithms for radio signal classification to adversarial attacks. Such vulnerability could result in severe consequences, including misinterpretation of critical messages, interception of classified information, or disruption of communication channels. Recent advancements in quantum computing have revolutionized theories and implementations of computation, bringing the unprecedented development of Quantum Machine Learning (QML). It has been shown that quantum variational classifiers (QVCs) provide notably enhanced robustness against classical adversarial attacks in image classification. However, no research has yet explored whether QML can similarly mitigate adversarial threats in radio signal classification. This work applies QVCs to radio signal classification and studies their robustness to various adversarial attacks. We also propose the novel application of the approximate amplitude encoding (AAE) technique to encode radio signal data efficiently. Our extensive simulation results show that attacks generated on QVCs transfer well to CNN models, indicating that these adversarial examples can fool neural networks they were not explicitly designed to attack. The converse, however, is not true: QVCs largely resist the attacks generated on CNNs. Overall, with comprehensive simulations, our results shed new light on the growing field of QML by bridging knowledge gaps in quantum adversarial machine learning (QAML) for radio signal classification and uncovering the advantages of applying QML methods in practical applications.
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.14)
- Oceania > Australia > Victoria (0.04)
- North America > United States > California > San Diego County > San Diego (0.04)
- (4 more...)
- Information Technology > Security & Privacy (1.00)
- Government (1.00)
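For intuition, exact amplitude encoding maps a length-N signal onto the amplitudes of an n-qubit state with 2^n >= N; the AAE technique referenced above approximates this state with a shallow trained circuit rather than preparing it exactly. A minimal NumPy sketch of the exact version:

```python
import numpy as np

def amplitude_encode(signal):
    """Exact amplitude encoding of a real-valued radio signal.

    Pads the signal to the next power of two and L2-normalizes it so the
    samples become the amplitudes of an n-qubit state (2^n amplitudes).
    """
    n_qubits = int(np.ceil(np.log2(len(signal))))
    padded = np.zeros(2 ** n_qubits)
    padded[: len(signal)] = signal
    state = padded / np.linalg.norm(padded)
    return state, n_qubits

# Toy usage: 128 I/Q-derived samples -> a 7-qubit state vector.
state, n_qubits = amplitude_encode(np.random.randn(128))
print(n_qubits, np.sum(state ** 2))  # 7, ~1.0 (valid quantum state)
```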
Radio Generation Using Generative Adversarial Networks with An Unrolled Design
Wang, Weidong, An, Jiancheng, Liao, Hongshu, Gan, Lu, Yuen, Chau
As a revolutionary generative paradigm of deep learning, generative adversarial networks (GANs) have been widely applied in various fields to synthesize realistic data. However, it is challenging for conventional GANs to synthesize raw signal data, especially in complex cases. In this paper, we develop a novel GAN framework for radio generation called "Radio GAN". Compared to conventional methods, it benefits from three key improvements. The first is learning based on sampling points, which aims to model an underlying sampling distribution of radio signals. The second is an unrolled generator design, combined with an estimated pure signal distribution as a prior, which greatly reduces learning difficulty and effectively improves learning precision. Finally, we present an energy-constrained optimization algorithm that achieves better training stability and convergence. Experimental results with extensive simulations demonstrate that our proposed GAN framework can effectively learn transmitter characteristics and various channel effects, accurately modeling the underlying sampling distribution to synthesize radio signals of high quality.
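The energy-constrained optimization mentioned above can be caricatured as an extra penalty in the generator objective that keeps generated signal power near a target level. The penalty form, the WGAN-style adversarial term, and the weight below are illustrative assumptions, not the paper's algorithm.

```python
import torch

def energy_constrained_g_loss(d_fake, fake_signals, target_energy, weight=0.1):
    """Generator loss with an illustrative energy constraint: the usual
    adversarial term plus a penalty keeping generated signal energy near
    a target level."""
    adv = -torch.mean(d_fake)                      # WGAN-style generator term
    energy = torch.mean(fake_signals ** 2, dim=1)  # per-sample mean power
    penalty = torch.mean((energy - target_energy) ** 2)
    return adv + weight * penalty

# Toy usage: discriminator scores for 16 fake signals of 1024 samples each.
loss = energy_constrained_g_loss(torch.randn(16, 1),
                                 torch.randn(16, 1024),
                                 target_energy=1.0)
```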
The Rise of Data-Centric AI - DATAVERSITY
Data-centric AI is gaining momentum among engineers. While traditionally, a model-centric approach has been used to improve accuracy for a variety of applications, the increase of data available today and the benefits of using reliable data are leading engineers to reevaluate their priorities and workflows. With a model's performance so dependent on the quality of the data it is being trained with, this data focus has empowered engineers to improve model accuracy without the circular process of constantly tweaking parameters. By improving data quality and model accuracy, data-centric AI allows for new areas of application and opens new opportunities in the field of engineering – from 5G communications to LiDAR, medical device imaging, state of charge estimations, and many more. While careful data examination has always proven critical to successful modeling, the modern challenge lies in determining how data-centric AI should advance to solve specific application problems, and what techniques and tools are available to do so.
- Information Technology > Artificial Intelligence (1.00)
- Information Technology > Data Science > Data Quality (0.76)
Learning similarities between biomedical signals with Deep Siamese Network
Today, I will walk you through electrocardiogram (ECG) biomedical signal data with the aim of learning similarity representations between two recorded signal events. The ECG is one of the most commonly encountered types of signal data in human medical recordings. So let's first understand, in layman's terms, what exactly a "signal" is, what an ECG signal is and why it is needed, and what a Siamese Neural Network is and how it can compare two vectors. Finally, we will work through a use case starting with ECG data analysis, including uni/multivariate plotting, rolling-window sum plots, data profiling, filtering outliers, detecting R-to-R peaks, and finally identifying ECG signal similarities with a Siamese Network model. In simple engineering terms, a "signal" is a quantity that represents some information. In the mathematical world, a signal is just a function that conveys some information: the information may be a function of time [y(t)], of spatial coordinates [y(x, y)], or of distance from a source [y(r)], for example.
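Before diving in, here is a minimal sketch of the Siamese setup the post builds toward: one shared encoder embeds two ECG windows, their Euclidean distance scores similarity, and a contrastive (Hadsell-style) loss trains the pairs. Layer sizes and the margin are illustrative, not the post's tuned values.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SiameseECG(nn.Module):
    """Minimal Siamese network: one shared 1-D CNN encoder embeds two ECG
    windows, and their Euclidean distance measures (dis)similarity."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, stride=2), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(32, 64),
        )

    def forward(self, x1, x2):
        z1, z2 = self.encoder(x1), self.encoder(x2)
        return F.pairwise_distance(z1, z2)

def contrastive_loss(distance, same, margin=1.0):
    """Pull same-class pairs together; push different-class pairs
    at least `margin` apart."""
    return torch.mean(same * distance ** 2 +
                      (1 - same) * torch.clamp(margin - distance, min=0) ** 2)

# Toy usage: batch of 8 ECG window pairs, 1 channel x 500 samples each.
model = SiameseECG()
d = model(torch.randn(8, 1, 500), torch.randn(8, 1, 500))
loss = contrastive_loss(d, same=torch.randint(0, 2, (8,)).float())
```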
The secret to AI success? Focusing on data preparation
Datasets are essential to AI models. They provide the ground truth by which we train AI models and measure a model's success. Engineers often look to the AI model as the key to delivering highly accurate results, but in reality it is often the data that determines a model's success. Data flows through every step of the AI workflow, from model training to deployment, and the way it is prepared can be the main driver of accuracy when designing robust AI models. Engineers can use these five tips to improve their data preparation process and drive success when developing a complete AI system.